Berkson's paradox, also known as Berkson's bias or Berkson's fallacy, is a result in conditional probability and statistics which is counterintuitive for some people, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks. It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.

==Statement==
The result is that two independent events become conditionally dependent (negatively dependent) given that at least one of them occurs. Symbolically:

:if 0 < P(''A'') < 1 and 0 < P(''B'') < 1
::(each event may or may not occur),
:and P(''A''|''B'') = P(''A'')
::(they are independent),
:then P(''A''|''B'',''C'') < P(''A''|''C'') where ''C'' = ''A''∪''B'' (i.e. ''A'' or ''B'')
::(''A'' is less likely to occur given that both ''B'' occurs and ''A'' or ''B'' occurs than given only that ''A'' or ''B'' occurs).

In words, given two independent events, if you only consider outcomes where at least one occurs, then they become negatively dependent.
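A short simulation can make the inequality concrete. The following Python sketch is illustrative and not part of the original article; the marginal probabilities 0.3 and 0.4 are arbitrary choices. It draws independent events ''A'' and ''B'', conditions on ''C'' = ''A''∪''B'', and checks that the estimate of P(''A''|''B'',''C'') comes out smaller than that of P(''A''|''C'').

<syntaxhighlight lang="python">
import random

# Illustrative simulation of Berkson's paradox: two independent events A and B
# become negatively dependent once we condition on C = A or B occurring.
random.seed(0)
n = 1_000_000
p_a, p_b = 0.3, 0.4  # arbitrary marginal probabilities, chosen for illustration

samples = [(random.random() < p_a, random.random() < p_b) for _ in range(n)]

# Keep only outcomes where at least one of A, B occurred (the event C).
conditioned = [(a, b) for a, b in samples if a or b]

p_a_given_c = sum(a for a, _ in conditioned) / len(conditioned)
p_a_given_b_and_c = sum(a for a, b in conditioned if b) / sum(b for _, b in conditioned)

print(f"P(A | C)    ≈ {p_a_given_c:.3f}")      # about 0.52 here
print(f"P(A | B, C) ≈ {p_a_given_b_and_c:.3f}")  # about 0.30, i.e. smaller, as the paradox predicts
</syntaxhighlight>

With these numbers, P(''A''|''C'') = P(''A'')/P(''A''∪''B'') = 0.3/0.58 ≈ 0.52, while P(''A''|''B'',''C'') = P(''A''|''B'') = P(''A'') = 0.3, so conditioning on ''C'' induces the negative dependence described above.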